Introduction

The current data collection is part of a broader attempt to pilot the design and to refine my hypotheses. The experiment itself is referred to as schemaVR1. Pre-planned and post-hoc analyses are clearly labelled as such.

This script was created to show the interim results. I uploaded this version in order to be transparent about the extent to which I analysed the data before the conclusion of the data collection. This is especially important because I did not pre-register the study. The aim of this pilot study is to collect data from sixteen participants in total, which will hopefully be done by the end of the week.

Personally, I will use this script to look at and check the results as soon as new data from participants is added to the dataset. This script covers only the presentation of results and the analyses. All pre-processing, concatenation and the actual updating of the dataset is done with a separate script, which also contains a detailed legend explaining all variable names. All relevant files will be uploaded in due course to my GitHub repository schemaVR1. Currently, I am not testing any of the various statistical assumptions; this will be part of the thorough data analysis as soon as I have finished data collection.

This version is commented and already attempts to discuss the findings because there are already a lot of significant effects. However, this is by no means anything like a registered report or pre-print.

Procedure

If anyone is actually interested, I copied the procedure section from the draft of my first year report/proposal:

After testing a couple of participants, it became apparent that I needed to add a familiarisation phase because immersive VR was, as a new experience, overwhelming. Therefore, participants had two minutes to get used to the feeling of being in a virtual world, spending that time in the same environment that was later used to learn how to pick up objects (see below).

The encoding phase of the experiment will start with a 45-second exposure to a virtual room of 5.15 virtual meters (vm) by 4.4 vm (height 2.58 vm), which was designed to be a kitchen. A vm corresponds to one meter in the real world. Twenty objects are scattered around at twenty discrete locations, so-called spawn points. The same object will appear at the same location for all participants, but we collected expectancy values for all 400 object/location combinations. Participants will freely walk around in the virtual room. They will be told to memorise all moveable objects and their locations. To ensure that, participants will be instructed to count the objects. Furthermore, they are told that they need to make sure to look at the floor and in the corners of the room, or otherwise they will miss objects. After the time has elapsed, the screen turns black.

The next task serves to eliminate active rehearsal, serving the same purpose as the reaction time task in Lew & Howe (2017), but also as practice for picking up objects in virtual reality. For this, the participants will stand on a virtual square plane surrounded by an infinite horizon. There are four big cubes (edge length 1 vm) in red, blue, green and yellow standing on the floor and four small cubes (edge length 0.5 vm) in the same colours. The experimenter will give the HTC Vive wand controller to the participants, who will be instructed to pick up the floating small cubes by inserting the controller into the object and continuously pressing the trigger button, which attaches the object to the controller. The task is to place the small cubes on the big cubes of the same colour. To do this, the participants need to hold the object above the big cube and release the trigger button so that the cube falls down into the right position.

When this is finished, the kitchen will be loaded again and the 3D location recall task will begin. Each of the twenty trials begins with the instantiation of an object at a fixed location in the room, which is the same for all twenty objects. Like the small cubes, the objects float above the floor. This was done so that the participants do not have to bend down to pick up an object. The participants will be instructed to pick the object up and place it at the location where it was during encoding. The participant will be instructed to tell the experimenter when the final placement has been made so that the experimenter can log the response. If the participant cannot remember the object itself, they are to indicate that they are guessing, and the trial is marked as a no memory trial. The participant will be aware that a trial can be restarted if the object is irretrievably lost, for instance because it is stuck in another object in the VE, which is possible. The order of objects is pseudo-random. The following measures will be saved: placement coordinates (x, y, z), time from start of trial to the time the object is picked up, the time to place the object once it is picked up, the number of attempts to place the object and whether the response was a guess (no memory trial) or a true answer. Once each of the twenty objects has been placed, the participants will be instructed to take off the VR goggles and sit on the chair in front of the computer in the same room to start the 3AFC location task.

On each trial in this task, the participant will be presented with three pictures showing the same object in three different locations, of which the participant has to choose the correct one by pressing the respective number on the keyboard, which is displayed under each picture. After this, the participant will be asked to indicate their confidence in this decision on a 3-point scale (1 = Did not see object, 2 = Guess the object was there, 3 = Know the object was there) by pressing the corresponding key. The first option should only be used when the participant does not remember seeing this object.

As soon as this task has been completed, participants will be asked to rate how likely it is that they would see the specific objects at specific locations in the kitchen they saw earlier. A generally unexpected object may be more or less expected depending on the location. After rating the twenty objects at the three locations, the participant will be asked to rate the general expectancy of those objects anywhere in a kitchen. The object/location ratings are separated from the general ratings by a loading screen. The scale ranges from unexpected (-100) to expected (100). The on-screen instructions read: 'Move the mouse to move the slider across the scale and press the left mouse button to make your response. Press the spacebar to start.'

Preliminary stuff

Libraries

library(plyr)
library(ggplot2)
library(lmerTest)
library(gridExtra)
library(grid)
library(knitr)

Data

# Loading data
load("data/mergedData/exp1Data.RData")

# Adding the normative object/location rating for each target and foil 1 & 2
combData$objLocTargetNorm <- 99
combData$objLocFoil1Norm  <- 99
combData$objLocFoil2Norm  <- 99
for(i in 1:dim(combData)[1]){
  combData$objLocTargetNorm[i] <- combData[i, paste("loc", combData$targetLocation[i], sep = "")]
  combData$objLocFoil1Norm[i]  <- combData[i, paste("loc", combData$foil1Location[i], sep = "")]
  combData$objLocFoil2Norm[i]  <- combData[i, paste("loc", combData$foil2Location[i], sep = "")]
}

Functions

pValue <- function(x, sign = '='){
  # To report a p-value in text.
  if (inherits(x, "lm")){
    # For lm objects, compute the overall model p-value from the F-statistic
    s <- summary.lm(x)
    x <- pf(s$fstatistic[1L], s$fstatistic[2L], s$fstatistic[3L], lower.tail = FALSE)
  }
  if(x > 1){
    stop("There is no p-value greater than 1")
  } else if(x < 0.001){
    x.converted <- '< .001'
  } else {
    # Drop the leading zero, e.g. 0.034 becomes '.034'
    x.converted <- paste(sign, substr(as.character(round(x, 3)), 2, 5))
  }
  return(x.converted)
}
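For instance, the helper turns raw p-values into APA-style strings. Here is a quick demonstration (pValue() is condensed to its numeric branch so the snippet runs standalone):

```r
# Condensed redefinition of pValue()'s numeric branch so this snippet runs on its own
pValue <- function(x, sign = '='){
  if (x > 1) stop("There is no p-value greater than 1")
  if (x < 0.001) return('< .001')
  paste(sign, substr(as.character(round(x, 3)), 2, 5))
}

pValue(0.0341)    # "= .034"
pValue(0.0005)    # "< .001"
pValue(0.08, '>') # "> .08"
```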

rValue <- function(x){
  # To report a correlation coefficient in text
  if (inherits(x, "lm")){
    r.squared <- summary(x)$r.squared
    x.converted <- paste('=',substr(as.character(round(r.squared, 3)), 2,5)) 
  } else {
    if (x < 0){
      x.converted <- paste('= -',substr(as.character(abs(round(x, 3))), 2,5), sep = '') 
    } else {
      x.converted <- paste('=',substr(as.character(abs(round(x, 3))), 2,5)) 
    }
  }
  return(x.converted) 
}

sigStars <- function(x){
  # Adding stars to indicate significance
  stars <- rep("", length(x))
  stars[x < 0.1   & x > 0.05]  <- '.' # trend
  stars[x < 0.05  & x > 0.01]  <- '*'
  stars[x < 0.01  & x > 0.001] <- '**'
  stars[x < 0.001]             <- '***' # no lower bound, so very small p-values are covered
  return(stars)
}
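A quick sanity check of the thresholds (the function is repeated here so the snippet runs on its own):

```r
# Repeated definition of sigStars() so this snippet is self-contained
sigStars <- function(x){
  stars <- rep("", length(x))
  stars[x < 0.1   & x > 0.05]  <- '.'
  stars[x < 0.05  & x > 0.01]  <- '*'
  stars[x < 0.01  & x > 0.001] <- '**'
  stars[x < 0.001]             <- '***'
  return(stars)
}

sigStars(c(0.2, 0.06, 0.03, 0.005, 0.0005))
# ""  "."  "*"  "**"  "***"
```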

createResultTable <- function(x){
  # Creating a nice looking table
  if(inherits(x, "glmerMod")){
    # For glmer table
    xTable        <- summary(x)$coefficients
    xTable        <- data.frame(xTable)
    xTable[, 1]   <- round(xTable[, 1], 2)
    xTable[, 2]   <- round(xTable[, 2], 2)
    xTable[, 3]   <- round(xTable[, 3], 2)
    xTable[, 4]   <- round(xTable[, 4], 4)
    xTable        <- cbind(xTable, sigStars(xTable[, 4]))
    names(xTable) <- c('Estimate', 'SE', 'Z', 'P', 'Sig')
  } else if(inherits(x, 'merModLmerTest')){
    xTable        <- summary(x)$coefficients
    xTable        <- data.frame(xTable)
    xTable[, 1]   <- round(xTable[, 1], 2)
    xTable[, 2]   <- round(xTable[, 2], 2)
    xTable[, 3]   <- round(xTable[, 3], 2)
    xTable[, 4]   <- round(xTable[, 4], 2)
    xTable[, 5]   <- round(xTable[, 5], 4)
    xTable        <- cbind(xTable, sigStars(xTable[, 5]))
    names(xTable) <- c('Estimate', 'SE', 'DF', 'T', 'P', 'Sig')
  } else if(inherits(x, 'anova')){
    if(attributes(x)$heading == "Analysis of Variance Table of type III  with  Satterthwaite \napproximation for degrees of freedom"){
      # Only ANOVA on lmerTest models with Satterthwaite approximation
      xTable        <- data.frame(x)
      xTable[, 1]   <- round(xTable[, 1], 2)
      xTable[, 2]   <- round(xTable[, 2], 2)
      xTable[, 3]   <- round(xTable[, 3], 2)
      xTable[, 4]   <- round(xTable[, 4], 2)
      xTable[, 5]   <- round(xTable[, 5], 2)
      xTable[, 6]   <- round(xTable[, 6], 4)
      xTable        <- cbind(xTable, sigStars(xTable[, 6]))
      names(xTable) <- c('SS', 'MSS', 'nDF', 'dDF', 'F', 'P', 'Sig') 
    } else {
      xTable <- data.frame('######', 'No known model', '######')
      names(xTable) <- c('%%%', '***', '&&&')
    }
  } else {
    xTable <- data.frame('######', 'No known model', '######')
    names(xTable) <- c('%%%', '***', '&&&')
  }
  return(xTable)
}

Population & demographics

# Aggregating to get summary of demographics
aggDemo   <- ddply(combData, c('subNum', 'gender'), summarise, age = mean(age))
aggGender <- table(aggDemo$gender)

Currently, there are 9 participants in this dataset, of which 4 are female and 5 are male. The mean age of the participants is 26.78 (SD = 3.73) years. Due to a handling error, one recall trial was not recorded for one participant. For the same participant, there is also no information available on whether the participant actually saw the object (i.e. a no memory trial). Nevertheless, I included this person in the analysis.

Overall performance

Analysis of no memory trials and confidence decisions

# Legend: 1 = Did not see object| 2 = Guess the object was there | 3 = Know the object was there
table1 <- table(combData$resCon)

# Looking at confidence response for each participant
#table2 <- ddply(combData, c('subNum'), summarise, noMemory = sum(recallNoMemory, na.rm = TRUE), notSeen = table(resCon)[1], guessed = table(resCon)[2], knew = table(resCon)[3])
#kable(table2)
# This information is not available online to protect anonymity.

# Looking at no memory trials and confidence responses for each object
table3 <- ddply(combData, c('objName', 'objNum'), summarise, 
                noMemory = sum(recallNoMemory, na.rm = TRUE), 
                notSeen  = table(resCon)[1], 
                guessed  = table(resCon)[2], 
                knew     = table(resCon)[3])

result0 <- cor.test(table3$noMemory, table3$notSeen, method = 'pearson')
kable(table3)
|objName        | objNum| noMemory| notSeen| guessed| knew|
|:--------------|------:|--------:|-------:|-------:|----:|
|bowl of fruits |      5|        0|       1|       4|    4|
|bread          |      9|        2|       1|       3|    5|
|calendar       |     19|        1|       1|       2|    6|
|dishes         |     12|        1|       3|       4|    2|
|fan            |     20|        1|       0|       5|    4|
|glass jug      |     10|        1|       3|       3|    3|
|hat            |     17|        5|       1|       6|    2|
|helmet         |     18|        2|       1|       4|    4|
|kitchen roll   |      2|        0|       1|       6|    2|
|knife          |      7|        2|       3|       6|    0|
|microwave      |      1|        1|       2|       6|    1|
|mixer          |      8|        2|       0|       5|    4|
|mug            |     11|        3|       1|       4|    4|
|pile of books  |     15|        0|       0|       6|    3|
|saucepan       |      3|        1|       2|       6|    1|
|tea pot        |      6|        0|       1|       5|    3|
|toaster        |      4|        0|       2|       5|    2|
|towels         |     13|        2|       2|       5|    2|
|toy            |     14|        1|       1|       3|    5|
|umbrella       |     16|        2|       2|       2|    5|

The number of trials during recall on which participants indicated that they did not see the object was 27 (17 %). Note that the instructions for one participant were different and therefore this number is not completely accurate. Furthermore, there were 28 ‘Did not see object’, 90 ‘Guess the object was there’ and 90 ‘Know the object was there’ responses in total. Note that one participant was instructed to report a no memory trial if this person did not see the object or absolutely did not know where to place it. All other participants were instructed to report a no memory trial only if there was no memory of the object itself. Additionally, two participants did not receive the general instruction to begin the encoding phase by broadly looking around in the room, to make sure they did not run out of time because they focussed on individual objects. This could explain why their number of no memory trials was quite high. Other than that, there seem to be no irregularities, except that no memory trials do not occur as often as 3AFC trials on which the participant indicated that they ‘Did not see the object’. The corresponding Pearson correlation coefficient is r = .009, p = .97. The reason for this discordance is not immediately apparent.

Memory performance

# Creating subsets for analysing recall and 3AFC, for which participants have item memory and then
# aggregating across participants to analyse overall memory performance
recallCombDataSub <- subset(combData, combData$recallNoMemory == 0 | is.na(combData$recallNoMemory))
agg1              <- ddply(recallCombDataSub, 
                           c('subNum'), 
                           summarise, 
                           accRecall = mean(accRecall, na.rm = TRUE))

afcCombDataSub    <- subset(combData, combData$resCon != 1)
agg2              <- ddply(afcCombDataSub, 
                           c('subNum'), 
                           summarise, 
                           accAFC = mean(accAFC))


plot1 <- ggplot(agg1, aes(x = "accRecall" , y = accRecall)) + 
                geom_boxplot(alpha = 0.5, width = 0.5) +
                geom_dotplot(binaxis='y', stackdir='center') +
                labs(y = 'Recall rate', 
                     x = ' ', 
                     title = 'A) Recall performance') + 
                coord_cartesian(ylim = c(0, 1), xlim = c(0.9, 1.1)) + 
                theme(plot.margin = margin(10, 10, 10, 10), 
                      axis.title.x=element_blank(),
                      axis.text.x=element_blank(),
                      axis.ticks.x=element_blank())

plot2 <- ggplot(agg2, aes(x = "accAFC" , y = accAFC)) + 
                geom_boxplot(alpha = 0.5, width = 0.5) +
                geom_dotplot(binaxis='y', stackdir='center') + 
                geom_hline(yintercept = 1/3) + 
                annotate('text', x = 0.95, y = (1/3) - 0.03, label = 'Chance (1/3)') + 
                labs(y = 'Accuracy (3AFC)', 
                     x = ' ', 
                     title = 'B) 3AFC performance') + 
                coord_cartesian(ylim = c(0, 1), xlim = c(0.9, 1.1)) + 
                theme(plot.margin = margin(10, 10, 10, 10), 
                      axis.title.x=element_blank(),
                      axis.text.x=element_blank(),
                      axis.ticks.x=element_blank())

# Arrange in grid and plot
grid.arrange(plot1, plot2, ncol = 2)

Recall accuracy

result1 <- t.test(agg1$accRecall)

Accuracy of recall was calculated by finding the spawn point with the shortest Euclidean distance to the placed object and checking whether that closest location is the target location. The mean recall rate was 0.38 (SD = 0.21), which is significantly above zero, t(8) = 5.42, p < .001. In contrast to the 3AFC task, it is not straightforward to calculate a chance level for the recall task. Therefore, I used zero.
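The closest-location check described above can be sketched as follows; all coordinates and the number of spawn points are toy values for illustration, not experiment data:

```r
# Toy spawn-point coordinates (x, y, z) in virtual meters; illustrative only
spawnPoints <- matrix(c(1.0, 0.0, 1.0,
                        4.0, 0.0, 2.0,
                        2.5, 0.0, 0.5),
                      ncol = 3, byrow = TRUE)
targetIndex <- 2                  # spawn point where the object was during encoding
placement   <- c(3.6, 0.0, 2.2)   # where the participant placed the object

# Euclidean distance from the placement to every spawn point
dists      <- sqrt(rowSums(sweep(spawnPoints, 2, placement)^2))
closestLoc <- which.min(dists)

accRecall <- as.numeric(closestLoc == targetIndex)  # 1 = correct placement
```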

Recall bias

# Finding the expectancy of the closest location for an object
combData$closestObjLocNorm <- NA
# Add normative location rating
for(i in 1:dim(combData)[1]){
  try(combData$closestObjLocNorm[i] <- combData[i, paste("loc", combData$closestLoc[i], sep = "")])
  # It was necessary to use try() because there are NA values, which cannot be used to index. 
}
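As an alternative to the try() guard, the NA rows can be skipped explicitly. A minimal sketch with a toy data frame (the column names loc1, loc2 and closestLoc only mimic the real ones):

```r
# Toy data: two normative location ratings per row and a possibly missing closest location
df <- data.frame(loc1 = c(10, 20), loc2 = c(30, 40), closestLoc = c(2, NA))
df$closestNorm <- NA
for (i in seq_len(nrow(df))) {
  # Only index by the location if it is not missing
  if (!is.na(df$closestLoc[i])) {
    df$closestNorm[i] <- df[i, paste0("loc", df$closestLoc[i])]
  }
}
df$closestNorm  # 30 for the first row, NA for the second
```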

recallBiasData <- ddply(subset(combData, recallNoMemory == 0), 
                        c('subNum', 'accRecall'), 
                        summarise, 
                        closestObjLocNorm = mean(closestObjLocNorm, na.rm = TRUE))

recallBias     <- ddply(recallBiasData, 
                        c('accRecall'), 
                        summarise, 
                        mean = mean(closestObjLocNorm, na.rm = TRUE), 
                        sd = sd(closestObjLocNorm, na.rm = TRUE))

One idea was that if participants did not remember the location of a particular object, they would place it at an expected location because they are biased towards these locations during retrieval. However, currently the mean normative object/location expectancy is lower for incorrectly placed objects, 7.39 (SD = 15.92), than for correctly placed objects, 20.72 (SD = 17.17). Therefore, there is no evidence for a recall bias towards more expected locations.

3AFC accuracy

result2 <- t.test(agg2$accAFC - 1/3)

The mean 3AFC accuracy rate was 0.69 (SD = 0.13), which is significantly higher than 0.33, t(8) = 8.12, p < .001.

Relationship between recall and 3AFC performance

plot3 <- ggplot(data.frame(accRecall = agg1$accRecall, accAFC = agg2$accAFC), aes(x  = accRecall, y = accAFC)) +
  geom_smooth(method = 'lm') + 
  geom_point() + 
  labs(y = '3AFC accuracy', 
       x = 'Recall rate', 
       title = 'Relationship between memory measures')

result3 <- cor.test(agg1$accRecall, agg2$accAFC)
plot3

Performance in the recall task is significantly correlated with performance in the 3AFC task, r = .817, p = .007. This is important because I was not completely sure whether the way I calculated recall accuracy is correct and sensitive enough to measure what I want to measure, because the possible locations are not, for instance, equidistant, and moreover some locations cluster closely in areas of the virtual kitchen.

Analysing expectancy

# Aggregating over all objects regardless of whether the participants actually saw them. This is, however, taken into account in the statistical analysis (see below). 
objectAgg <- ddply(combData, 
                   c('objNum', 'objName'), 
                   summarise, 
                   generalRatingNorm  = mean(generalRatingNorm),
                   generalRatingPost  = mean(generalRatingPost),
                   afc                = mean(accAFC),
                   targetRankPre      = mean(targetRankPre),
                   objLocTargetRating = mean(objLocTargetRating),
                   recall             = mean(accRecall, na.rm = TRUE), 
                   objLocTargetNorm   = mean(objLocTargetNorm),
                   euclideanDist      = mean(accRecall, na.rm = TRUE),
                   answerTime         = mean(answerTime, na.rm = TRUE))

# Displaying relationship between average object/location expectancy values from normative and participant data.
plot4 <- ggplot(objectAgg, aes(x  = objLocTargetNorm, y = objLocTargetRating)) +
                geom_smooth(method = 'lm') + 
                geom_point() + 
                labs(y = 'Participant data', 
                     x = 'Normative data', 
                     title = 'A) Object/location expectancy')

# Displaying relationship between average general expectancy values from normative and participant data.
plot5 <- ggplot(objectAgg, aes(x  = generalRatingNorm, y = generalRatingPost)) +
                geom_smooth(method = 'lm') + 
                geom_point() + 
                labs(y = 'Participant data', 
                     x = 'Normative data', 
                     title = 'B) General expectancy')

# Calculating the correlations  
result4 <- cor.test(objectAgg$objLocTargetNorm, objectAgg$objLocTargetRating)
result5 <- cor.test(objectAgg$generalRatingNorm, objectAgg$generalRatingPost)

# Plot
grid.arrange(plot4, plot5, ncol = 2)

For object/location expectancy, the correlation between normative data and average participant ratings for objects at their encoding locations is not significant, r = .203, p = .391, which is not completely surprising because the participants in the normative study rated a much higher number of stimuli and did not complete a memory test. On the other hand, the correlation for the general expectancy values for objects at their encoding locations is significant and very high, r = .968, p < .001, which might be explained by the fact that both groups of participants rated the same number of objects in terms of their general expectancy.

Analysing effect of expectancy on memory

Note that the subsequent plots are only illustrations of the models because they average data across objects, while all statistical models are trial-based and only include trials on which the participant did not indicate that they had not seen the object, in other words trials with item memory.

As for my specific predictions, this is a quote from a draft of my first year report, where I specify my hypotheses:

My main hypothesis is that there is a U-shaped relationship between schema-expectancy and memory. However, when recall is used as an assay to test memory, there is always the confound that participants might be biased towards the more expected locations. In that respect, I predict that memory precision is enhanced for highly expected object/location combinations, but that if a participant does not remember the location, they are more likely to place the object close to an expected location (i.e. a spawn point) than to an unexpected location. In contrast, I predict a U-shaped relationship for performance in the 3AFC task (unexpected > neutral < expected) because the retrieval effects are reduced, as all three options are chosen to have similar expectancy values.

Concerning the models used in the experiments: my original plan was to include random intercepts and slopes for the objects and for the participants, but in simulations I had problems actually retrieving the true parameters (see here). Furthermore, my supervisor pointed out that adding random effects for the objects might take away the effect of their expectancy. Therefore, for now, I decided to only include a random intercept for each participant. The quadratic term is included to look for the predicted U-shaped relationship. Of course, however, a significant quadratic term does not automatically amount to a U-shaped relationship.
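To illustrate what the quadratic term is meant to capture, here is a minimal simulation sketch (simulated data and a plain glm() instead of glmer(), purely for illustration): with a true U-shape on the logit scale, the fitted I(x * x) coefficient comes out positive.

```r
set.seed(1)
x <- as.numeric(scale(runif(500, -100, 100)))  # standardised expectancy values
p <- plogis(-0.5 + 0.8 * x^2)                  # true U-shaped effect on the logit scale
y <- rbinom(500, 1, p)                         # simulated binary accuracy

m <- glm(y ~ x + I(x * x), family = binomial)
coef(m)["I(x * x)"]                            # positive, recovering the U-shape
```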

As you will see, sometimes I get the exact opposite of the result I expected (see e.g. model 10). One of the confounds for these models is that all the objects that are expected in a kitchen are at one end of the scale in terms of their object/location expectancy, and are all, by definition, highly expected in a kitchen in general, which seems to be associated with lower memory performance. Therefore, I decided post-hoc to add the corresponding other form of expectancy to the model as a covariate. There will also be a second experiment in which objects that are highly expected in a kitchen span the whole range of object/location expectancy. All the other models except the ones predicting the time to place the objects were pre-planned. I looked at this variable because a recent study using immersive VR by Draschkow & Võ (2017) found an interesting relationship between ‘scene grammar’ and object handling time.

# Preparation
# scaling for analysis
combDataScaled                    <- combData
combDataScaled$objLocTargetRating <- scale(combData$objLocTargetRating)
combDataScaled$targetRankPre      <- scale(combData$targetRankPre)
combDataScaled$generalRatingNorm  <- scale(combData$generalRatingNorm)
combDataScaled$generalRatingPost  <- scale(combData$generalRatingPost)
combDataScaled$objLocTargetNorm   <- scale(combDataScaled$objLocTargetNorm)

# Adding kitchen relevance. Note that 12 objects were chosen because they are expected in a kitchen, while the other 8 objects were not expected in a kitchen.
objectAgg$expectedInKitchen                                  <- 'low'
objectAgg[which(objectAgg$objNum < 13), 'expectedInKitchen'] <- 'highly'

3AFC models

# Plots
# Participant object/location expectancy versus 3AFC
plot6 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = afc)) + 
                geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                geom_smooth() +  
                labs(y = 'Mean accuracy (3AFC)', 
                     x = "Post-ratings expectancy", 
                     title = 'Model 1: Object/location expectancy') +
                coord_cartesian(ylim = c(0, 1.25), xlim = c(-100, 100), expand = TRUE) + 
                theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))

plot7 <- ggplot(objectAgg, aes(x = targetRankPre, y = afc)) + 
                geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                geom_smooth() +  
                labs(y = 'Mean accuracy (3AFC)', 
                     x = "Normative expectancy (ranked)", 
                     title = 'Model 2: Object/location expectancy') + 
                coord_cartesian(ylim = c(0, 1.25), xlim = c(0, 400), expand = TRUE) + 
                theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot8 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = afc)) + 
                geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                geom_smooth() +  
                labs(y = 'Mean accuracy (3AFC)', 
                     x = "Normative expectancy ", 
                     title = 'Model 3: Object/location expectancy') + 
                coord_cartesian(ylim = c(0, 1.25), xlim = c(-100, 100), expand = TRUE) + 
                theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot9 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = afc)) + 
                geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                geom_smooth() +  
                labs(y = 'Mean accuracy (3AFC)', 
                     x = "Normative expectancy", 
                     title = 'Model 4: General expectancy') + 
                coord_cartesian(ylim = c(0, 1.25), xlim = c(-100, 100), expand = TRUE) + 
                theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")


grid.arrange(plot6, plot7, plot8, plot9, ncol = 2, nrow = 2)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
##  [1] "1 = microwave"      "2 = kitchen roll"   "3 = saucepan"      
##  [4] "4 = toaster"        "5 = bowl of fruits" "6 = tea pot"       
##  [7] "7 = knife"          "8 = mixer"          "9 = bread"         
## [10] "10 = glass jug"     "11 = mug"           "12 = dishes"       
## [13] "13 = towels"        "14 = toy"           "15 = pile of books"
## [16] "16 = umbrella"      "17 = hat"           "18 = helmet"       
## [19] "19 = calendar"      "20 = fan"
# Data
subAFC    <- subset(combDataScaled, combDataScaled$resCon != 1)

# Models 
# Participant object/location expectancy versus 3AFC
model1 <- glmer(accAFC ~  objLocTargetRating + I(objLocTargetRating*objLocTargetRating) + (1 | subNum), 
                    data = subAFC, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)
kable(createResultTable(model1))
|                                            | Estimate|   SE|     Z|      P|Sig |
|:-------------------------------------------|--------:|----:|-----:|------:|:---|
|(Intercept)                                 |     0.79| 0.28|  2.83| 0.0046|**  |
|objLocTargetRating                          |    -0.07| 0.18| -0.39| 0.6932|    |
|I(objLocTargetRating * objLocTargetRating)  |     0.01| 0.21|  0.05| 0.9613|    |
# Normative (ranked) object/location expectancy versus 3AFC
model2 <- glmer(accAFC ~  targetRankPre + I(targetRankPre*targetRankPre) + (1 | subNum), 
                    data = subAFC, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model2))
|                                 | Estimate|   SE|     Z|      P|Sig |
|:--------------------------------|--------:|----:|-----:|------:|:---|
|(Intercept)                      |     0.99| 0.30|  3.32| 0.0009|*** |
|targetRankPre                    |    -0.30| 0.18| -1.65| 0.0981|.   |
|I(targetRankPre * targetRankPre) |    -0.17| 0.24| -0.73| 0.4640|    |
# Normative object/location expectancy versus 3AFC
model3 <- glmer(accAFC ~  objLocTargetNorm + I(objLocTargetNorm*objLocTargetNorm) + (1 | subNum), 
                    data = subAFC, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model3))
|                                        | Estimate|   SE|    Z|      P|Sig |
|:---------------------------------------|--------:|----:|----:|------:|:---|
|(Intercept)                             |     0.47| 0.25| 1.90| 0.0579|.   |
|objLocTargetNorm                        |     0.08| 0.20| 0.42| 0.6714|    |
|I(objLocTargetNorm * objLocTargetNorm)  |     0.37| 0.20| 1.82| 0.0681|.   |
# Normative general expectancy versus 3AFC
model4 <- glmer(accAFC ~  generalRatingNorm + I(generalRatingNorm*generalRatingNorm)  + (1 | subNum), 
                    data = subAFC, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model4))
|                                          | Estimate|   SE|     Z|      P|Sig |
|:-----------------------------------------|--------:|----:|-----:|------:|:---|
|(Intercept)                               |     1.10| 0.34|  3.23| 0.0012|**  |
|generalRatingNorm                         |    -0.64| 0.29| -2.22| 0.0267|*   |
|I(generalRatingNorm * generalRatingNorm)  |    -0.27| 0.27| -1.01| 0.3142|    |

3AFC models with covariates

This was decided post-hoc. The idea here was to adjust for the respective general or object/location expectancy. Another idea would be to control for answer time.

model5 <- glmer(accAFC ~  objLocTargetRating + 
                     I(objLocTargetRating*objLocTargetRating) + 
                     generalRatingPost +
                     (1 | subNum), data = combDataScaled, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model5))
|                                            | Estimate|   SE|     Z|      P|Sig |
|:-------------------------------------------|--------:|----:|-----:|------:|:---|
|(Intercept)                                 |     0.45| 0.26|  1.70| 0.0883|.   |
|objLocTargetRating                          |     0.06| 0.17|  0.34| 0.7326|    |
|I(objLocTargetRating * objLocTargetRating)  |     0.22| 0.20|  1.08| 0.2795|    |
|generalRatingPost                           |    -0.49| 0.19| -2.62| 0.0087|**  |
model6 <- glmer(accAFC ~  targetRankPre + 
                     I(targetRankPre*targetRankPre) + 
                     generalRatingNorm +
                     (1 | subNum), data = combDataScaled, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model6))
|                                 | Estimate|   SE|     Z|      P|Sig |
|:--------------------------------|--------:|----:|-----:|------:|:---|
|(Intercept)                      |     0.35| 0.33|  1.05| 0.2932|    |
|targetRankPre                    |    -0.04| 0.16| -0.26| 0.7963|    |
|I(targetRankPre * targetRankPre) |     0.32| 0.29|  1.12| 0.2631|    |
|generalRatingNorm                |    -0.58| 0.23| -2.51| 0.0119|*   |
model7 <- glmer(accAFC ~  objLocTargetNorm + 
                     I(objLocTargetNorm*objLocTargetNorm) + 
                     generalRatingNorm +
                     (1 | subNum), data = combDataScaled, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model7))
|                                        | Estimate|   SE|     Z|      P|Sig |
|:---------------------------------------|--------:|----:|-----:|------:|:---|
|(Intercept)                             |     0.48| 0.25|  1.95| 0.0512|.   |
|objLocTargetNorm                        |    -0.01| 0.18| -0.05| 0.9631|    |
|I(objLocTargetNorm * objLocTargetNorm)  |     0.18| 0.18|  0.98| 0.3293|    |
|generalRatingNorm                       |    -0.39| 0.19| -2.06| 0.0397|*   |
model8 <- glmer(accAFC ~  generalRatingNorm + 
                     I(generalRatingNorm*generalRatingNorm) + 
                     objLocTargetNorm +
                     (1 | subNum), data = combDataScaled, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model8))
Estimate SE Z P Sig
(Intercept) 0.98 0.31 3.13 0.0017 **
generalRatingNorm -0.67 0.26 -2.55 0.0108 *
I(generalRatingNorm * generalRatingNorm) -0.32 0.24 -1.33 0.1840
objLocTargetNorm 0.02 0.18 0.11 0.9099

3AFC interpretation

The most robust result is that 3AFC performance is much higher for objects that are generally unexpected in a kitchen: all measures of general expectancy are negatively associated with 3AFC performance. Notably, though, model 3 shows a trend towards a U-shaped relationship.
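Whether such a U-shape is more than a trend can be checked with a likelihood-ratio test on the quadratic term. Below is a minimal base-R sketch of that comparison, using simulated data in place of combDataScaled and plain glm() in place of glmer(); the same logic applies to the mixed models via anova() on two nested glmer fits.

```r
# Not part of the original analysis: a likelihood-ratio test of the quadratic
# term, illustrated with simulated binary accuracy data and base-R glm().
set.seed(1)
x <- as.numeric(scale(runif(200, -100, 100)))  # stand-in for a scaled rating
p <- plogis(0.4 + 0.3 * x^2)                   # a true U-shaped relationship
y <- rbinom(200, 1, p)                         # simulated 3AFC accuracy

linFit  <- glm(y ~ x,            family = binomial)
quadFit <- glm(y ~ x + I(x * x), family = binomial)

# Chi-square test on the change in deviance (1 df for the quadratic term)
anova(linFit, quadFit, test = "Chisq")
```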

Recall models

The statistics below are calculated only on trials that were not 'no memory' trials (i.e. recallNoMemory == 0).
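As a toy illustration of that exclusion (the real subsetting on recallNoMemory happens further down, on combDataScaled):

```r
# Toy data: recallNoMemory == 1 marks trials on which the participant
# indicated having no memory for the object; those trials are excluded.
demo <- data.frame(subNum         = rep(1:2, each = 4),
                   recallNoMemory = c(0, 1, 0, 0, 0, 0, 1, 1))

table(demo$subNum, demo$recallNoMemory)  # excluded trials per participant

demoRecall <- subset(demo, recallNoMemory == 0)
nrow(demoRecall)  # 5 trials remain
```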

plot10 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = recall)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +  
                 geom_smooth() +  
                 labs(y = 'Mean accuracy (Recall)', 
                      x = "Post-ratings expectancy", 
                      title = 'Model 9: Object/location expectancy')  + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))

plot11 <- ggplot(objectAgg, aes(x = targetRankPre, y = recall)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Mean accuracy (Recall)', 
                      x = "Normative expectancy (ranked)", 
                      title = 'Model 10: Object/location expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(0, 400), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot12 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = recall)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Mean accuracy (Recall)', 
                      x = "Normative expectancy", 
                      title = 'Model 11: Object/location expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot13 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = recall)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Mean accuracy (Recall)', 
                      x = "Normative expectancy", 
                      title = 'Model 12: General expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")


plot14 <- ggplot(objectAgg, aes(x = generalRatingPost, y = recall)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Mean accuracy (Recall)', 
                      x = "Post-ratings expectancy", 
                      title = 'Model 13: General expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

grid.arrange(plot10, plot11, plot12, plot13, plot14, ncol = 2, nrow = 3)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
##  [1] "1 = microwave"      "2 = kitchen roll"   "3 = saucepan"      
##  [4] "4 = toaster"        "5 = bowl of fruits" "6 = tea pot"       
##  [7] "7 = knife"          "8 = mixer"          "9 = bread"         
## [10] "10 = glass jug"     "11 = mug"           "12 = dishes"       
## [13] "13 = towels"        "14 = toy"           "15 = pile of books"
## [16] "16 = umbrella"      "17 = hat"           "18 = helmet"       
## [19] "19 = calendar"      "20 = fan"
# Data
subRecall <- subset(combDataScaled, combDataScaled$recallNoMemory == 0)

# Participant object/location expectancy versus recall
model9 <- glmer(accRecall ~ objLocTargetRating + 
                      I(objLocTargetRating*objLocTargetRating) + 
                      (1 | subNum), 
                    data = subRecall, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model9))
Estimate SE Z P Sig
(Intercept) -0.81 0.39 -2.05 0.0404 *
objLocTargetRating 0.20 0.20 0.99 0.3204
I(objLocTargetRating * objLocTargetRating) 0.15 0.25 0.63 0.5297
# Normative (ranked) object/location expectancy versus recall
model10 <- glmer(accRecall ~  targetRankPre + 
                       I(targetRankPre*targetRankPre) + 
                       (1 | subNum), 
                     data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                     nAGQ = 1)

kable(createResultTable(model10))
Estimate SE Z P Sig
(Intercept) -0.17 0.40 -0.42 0.6741
targetRankPre -0.14 0.20 -0.70 0.4846
I(targetRankPre * targetRankPre) -0.51 0.28 -1.82 0.0693 .
# Normative object/location expectancy versus recall
model11 <- glmer(accRecall ~  objLocTargetNorm + 
                       I(objLocTargetNorm*objLocTargetNorm) + 
                       (1 | subNum), 
                     data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                     nAGQ = 1)

kable(createResultTable(model11))
Estimate SE Z P Sig
(Intercept) -0.87 0.40 -2.19 0.0282 *
objLocTargetNorm 0.44 0.20 2.21 0.0272 *
I(objLocTargetNorm * objLocTargetNorm) 0.18 0.21 0.87 0.3847
# Normative general expectancy versus recall
model12 <- glmer(accRecall ~  generalRatingNorm + 
                       I(generalRatingNorm*generalRatingNorm)  + 
                       (1 | subNum), data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                     nAGQ = 1)

kable(createResultTable(model12))
Estimate SE Z P Sig
(Intercept) -0.76 0.43 -1.77 0.0768 .
generalRatingNorm -0.77 0.30 -2.59 0.0095 **
I(generalRatingNorm * generalRatingNorm) 0.11 0.29 0.38 0.7074
# Participant general expectancy versus recall
model13 <- glmer(accRecall ~  generalRatingPost + 
                       I(generalRatingPost*generalRatingPost)  + 
                       (1 | subNum), 
                     data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                   nAGQ = 1)

kable(createResultTable(model13))
Estimate SE Z P Sig
(Intercept) -0.58 0.41 -1.42 0.1559
generalRatingPost -0.76 0.32 -2.37 0.0179 *
I(generalRatingPost * generalRatingPost) -0.08 0.29 -0.29 0.7693

Recall models with covariates

# Participant object/location expectancy versus recall
model14 <- glmer(accRecall ~  
                      objLocTargetRating + 
                      I(objLocTargetRating*objLocTargetRating) + 
                      generalRatingPost +
                      (1 | subNum), 
                    data = subRecall, 
                    family = binomial, 
                    control = glmerControl(optimizer = "bobyqa"),
                    nAGQ = 1)

kable(createResultTable(model14))
Estimate SE Z P Sig
(Intercept) -1.09 0.43 -2.55 0.0107 *
objLocTargetRating 0.25 0.21 1.16 0.2469
I(objLocTargetRating * objLocTargetRating) 0.44 0.27 1.62 0.1042
generalRatingPost -0.80 0.23 -3.45 0.0006 ***
# Normative (ranked) object/location expectancy versus recall
model15 <- glmer(accRecall ~  targetRankPre + 
                       I(targetRankPre*targetRankPre) + 
                       generalRatingNorm +
                       (1 | subNum), 
                     data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                     nAGQ = 1)

kable(createResultTable(model15))
Estimate SE Z P Sig
(Intercept) -1.28 0.55 -2.31 0.0210 *
targetRankPre 0.08 0.22 0.38 0.7041
I(targetRankPre * targetRankPre) 0.62 0.42 1.48 0.1393
generalRatingNorm -1.20 0.33 -3.58 0.0003 ***
# Normative object/location expectancy versus recall
model16 <- glmer(accRecall ~  objLocTargetNorm + 
                       I(objLocTargetNorm*objLocTargetNorm) + 
                       generalRatingNorm +
                       (1 | subNum), 
                     data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                     nAGQ = 1)

kable(createResultTable(model16))
Estimate SE Z P Sig
(Intercept) -0.65 0.42 -1.55 0.1205
objLocTargetNorm 0.24 0.22 1.13 0.2588
I(objLocTargetNorm * objLocTargetNorm) -0.03 0.23 -0.14 0.8899
generalRatingNorm -0.78 0.24 -3.32 0.0009 ***
# Normative general expectancy versus recall
model17 <- glmer(accRecall ~  generalRatingNorm + 
                       I(generalRatingNorm*generalRatingNorm)  + 
                       objLocTargetNorm +
                       (1 | subNum), 
                     data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                     nAGQ = 1)

kable(createResultTable(model17))
Estimate SE Z P Sig
(Intercept) -0.75 0.44 -1.72 0.0850 .
generalRatingNorm -0.72 0.30 -2.37 0.0178 *
I(generalRatingNorm * generalRatingNorm) 0.08 0.30 0.28 0.7764
objLocTargetNorm 0.24 0.22 1.10 0.2727
# Participant general expectancy versus recall
model18 <- glmer(accRecall ~  generalRatingPost + 
                       I(generalRatingPost*generalRatingPost)  + 
                       objLocTargetRating +
                       (1 | subNum), 
                     data = subRecall, 
                     family = binomial, 
                     control = glmerControl(optimizer = "bobyqa"),
                     nAGQ = 1)

kable(createResultTable(model18))
Estimate SE Z P Sig
(Intercept) -0.57 0.42 -1.36 0.1723
generalRatingPost -0.77 0.32 -2.37 0.0177 *
I(generalRatingPost * generalRatingPost) -0.10 0.29 -0.34 0.7342
objLocTargetRating 0.16 0.21 0.78 0.4339

Recall interpretation

Models 9 and 10 show a very unexpected inverted U-shaped relationship. However, this might again be because objects that are generally unexpected in a kitchen are associated with better memory, and these objects lie in the middle of the object/location expectancy scale. Model 11 looks very similar to model 3. When general expectancy is controlled for, several models show a trend towards a significant quadratic term (models 14 and 15).
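To judge where such a quadratic trend bends within the observed range, the fixed effects can be pushed through the inverse logit. A sketch (the helper is not part of the original script), shown with the model 14 estimates for objLocTargetRating:

```r
# Plot the model-implied recall probability for a logistic model with an
# intercept, a linear and a quadratic term (fixed effects only).
plotQuadTrend <- function(b0, b1, b2, from = -2, to = 2) {
  x <- seq(from, to, length.out = 100)
  p <- plogis(b0 + b1 * x + b2 * x^2)  # inverse logit of the linear predictor
  plot(x, p, type = "l", ylim = c(0, 1),
       xlab = "Scaled expectancy", ylab = "Predicted recall probability")
  invisible(data.frame(x = x, p = p))
}

# Model 14 fixed effects: intercept -1.09, linear 0.25, quadratic 0.44
curveDat <- plotQuadTrend(-1.09, 0.25, 0.44)
```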

Euclidean distance models

plot15 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = euclideanDist)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +  
                 geom_smooth() +  
                 labs(y = 'Euclidean distance in vm', 
                      x = "Post-ratings expectancy", 
                      title = 'Model 19: Object/location expectancy')  + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))

plot16 <- ggplot(objectAgg, aes(x = targetRankPre, y = euclideanDist)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Euclidean distance in vm', 
                      x = "Normative expectancy (ranked)", 
                      title = 'Model 20: Object/location expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(0, 400), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot17 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = euclideanDist)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Euclidean distance in vm', 
                      x = "Normative expectancy", 
                      title = 'Model 21: Object/location expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot18 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = euclideanDist)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Euclidean distance in vm', 
                      x = "Normative expectancy", 
                      title = 'Model 22: General expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot19 <- ggplot(objectAgg, aes(x = generalRatingPost, y = euclideanDist)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Euclidean distance in vm', 
                      x = "Post-ratings expectancy", 
                      title = 'Model 23: General expectancy') + 
                 coord_cartesian(ylim = c(0, 1), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

grid.arrange(plot15, plot16, plot17, plot18, plot19, ncol = 2, nrow = 3)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
##  [1] "1 = microwave"      "2 = kitchen roll"   "3 = saucepan"      
##  [4] "4 = toaster"        "5 = bowl of fruits" "6 = tea pot"       
##  [7] "7 = knife"          "8 = mixer"          "9 = bread"         
## [10] "10 = glass jug"     "11 = mug"           "12 = dishes"       
## [13] "13 = towels"        "14 = toy"           "15 = pile of books"
## [16] "16 = umbrella"      "17 = hat"           "18 = helmet"       
## [19] "19 = calendar"      "20 = fan"
# Model 19: 
model19 <- lmer(euclideanDist ~  objLocTargetRating + 
                  I(objLocTargetRating*objLocTargetRating) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model19))
Estimate SE DF T P Sig
(Intercept) 1.84 0.28 12.21 6.54 0.0000
objLocTargetRating -0.29 0.13 125.71 -2.27 0.0248 *
I(objLocTargetRating * objLocTargetRating) -0.34 0.16 128.94 -2.12 0.0355 *
# Model 20: 
model20 <- lmer(euclideanDist ~  targetRankPre + 
                  I(targetRankPre*targetRankPre) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model20))
Estimate SE DF T P Sig
(Intercept) 1.52 0.31 13.74 4.87 0.0003 ***
targetRankPre -0.02 0.13 123.13 -0.17 0.8673
I(targetRankPre * targetRankPre) -0.01 0.17 123.33 -0.04 0.9668
# Model 21: 
model21 <- lmer(euclideanDist ~  objLocTargetNorm + 
                  I(objLocTargetNorm*objLocTargetNorm) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model21))
Estimate SE DF T P Sig
(Intercept) 1.42 0.29 10.46 4.85 0.0006 ***
objLocTargetNorm -0.22 0.12 122.84 -1.82 0.0714 .
I(objLocTargetNorm * objLocTargetNorm) 0.09 0.13 122.90 0.73 0.4658
# Model 22: 
model22 <- lmer(euclideanDist ~  generalRatingNorm + 
                  I(generalRatingNorm*generalRatingNorm) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model22))
Estimate SE DF T P Sig
(Intercept) 1.71 0.32 12.92 5.39 0.0001
generalRatingNorm -0.05 0.19 123.85 -0.28 0.7831
I(generalRatingNorm * generalRatingNorm) -0.21 0.18 124.17 -1.18 0.2404
# Model 23: 
model23 <- lmer(euclideanDist ~  generalRatingPost + 
                  I(generalRatingPost*generalRatingPost) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model23))
Estimate SE DF T P Sig
(Intercept) 1.65 0.31 13.26 5.36 0.0001
generalRatingPost -0.13 0.20 127.06 -0.64 0.5229
I(generalRatingPost * generalRatingPost) -0.14 0.19 128.05 -0.76 0.4503

Euclidean distance models with covariates

# Model 24: 
model24 <- lmer(euclideanDist ~  objLocTargetRating + 
                  I(objLocTargetRating*objLocTargetRating) + 
                  generalRatingPost +
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model24))
Estimate SE DF T P Sig
(Intercept) 1.86 0.28 12.61 6.56 0.0000
objLocTargetRating -0.30 0.13 124.70 -2.28 0.0244 *
I(objLocTargetRating * objLocTargetRating) -0.35 0.16 127.95 -2.15 0.0333 *
generalRatingPost 0.05 0.14 123.51 0.39 0.6952
# Model 25: 
model25 <- lmer(euclideanDist ~  targetRankPre + 
                  I(targetRankPre*targetRankPre) + 
                  generalRatingNorm + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model25))
Estimate SE DF T P Sig
(Intercept) 1.69 0.35 19.66 4.86 0.0001
targetRankPre -0.06 0.13 122.08 -0.44 0.6616
I(targetRankPre * targetRankPre) -0.19 0.24 122.38 -0.81 0.4172
generalRatingNorm 0.21 0.18 122.07 1.15 0.2506
# Model 26: 
model26 <- lmer(euclideanDist ~  objLocTargetNorm + 
                  I(objLocTargetNorm*objLocTargetNorm) + 
                  generalRatingNorm + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model26))
Estimate SE DF T P Sig
(Intercept) 1.40 0.30 11.11 4.71 0.0006 ***
objLocTargetNorm -0.20 0.13 122.05 -1.57 0.1197
I(objLocTargetNorm * objLocTargetNorm) 0.11 0.14 121.90 0.83 0.4085
generalRatingNorm 0.07 0.14 122.23 0.46 0.6489
# Model 27: 
model27 <- lmer(euclideanDist ~  generalRatingNorm + 
                  I(generalRatingNorm*generalRatingNorm) + 
                  objLocTargetNorm +
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model27))
Estimate SE DF T P Sig
(Intercept) 1.69 0.32 12.88 5.31 0.0001
generalRatingNorm -0.10 0.19 122.88 -0.54 0.5914
I(generalRatingNorm * generalRatingNorm) -0.19 0.18 123.11 -1.04 0.3023
objLocTargetNorm -0.19 0.13 121.97 -1.44 0.1528
# Model 28: 
model28 <- lmer(euclideanDist ~  generalRatingPost + 
                  I(generalRatingPost*generalRatingPost) + 
                  objLocTargetRating +
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model28))
Estimate SE DF T P Sig
(Intercept) 1.63 0.31 13.02 5.29 0.0001
generalRatingPost -0.12 0.20 125.92 -0.61 0.5431
I(generalRatingPost * generalRatingPost) -0.13 0.19 126.95 -0.66 0.5080
objLocTargetRating -0.22 0.13 123.39 -1.72 0.0878 .

Euclidean distance interpretation

A similar pattern emerges when the Euclidean distance is the dependent variable. Model 24 is interesting because the significant quadratic effect persists even when general expectancy is controlled for.
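For reference, the dependent variable here is the distance in virtual metres between where an object was replaced and its original spawn point. On the floor plane this reduces to the usual two-dimensional formula; the function and coordinate names below are hypothetical, as the actual column names in the dataset may differ.

```r
# 2D Euclidean distance in virtual metres (vm); x/z denote the floor-plane
# coordinates of the replaced and the original object position.
euclidean <- function(xPlaced, zPlaced, xTarget, zTarget) {
  sqrt((xPlaced - xTarget)^2 + (zPlaced - zTarget)^2)
}

euclidean(1.2, 0.5, 0, 0)  # object misplaced by 1.2 vm and 0.5 vm -> 1.3 vm
```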

Answer time models

plot15 <- ggplot(objectAgg, aes(x = objLocTargetRating, y = answerTime)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) +  
                 geom_smooth() +  
                 labs(y = 'Time to place object in sec', 
                      x = "Post-ratings expectancy", 
                      title = 'Model 19: Object/location expectancy')  + 
                 coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = c(1.25, 0.8))

plot16 <- ggplot(objectAgg, aes(x = targetRankPre, y = answerTime)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Time to place object in sec', 
                      x = "Normative expectancy (ranked)", 
                      title = 'Model 20: Object/location expectancy') + 
                 coord_cartesian(ylim = c(0, 20), xlim = c(0, 400), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot17 <- ggplot(objectAgg, aes(x = objLocTargetNorm, y = answerTime)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Time to place object in sec', 
                      x = "Normative expectancy", 
                      title = 'Model 21: Object/location expectancy') + 
                 coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot18 <- ggplot(objectAgg, aes(x = generalRatingNorm, y = answerTime)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Time to place object in sec', 
                      x = "Normative expectancy", 
                      title = 'Model 22: General expectancy') + 
                 coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

plot19 <- ggplot(objectAgg, aes(x = generalRatingPost, y = answerTime)) + 
                 geom_text(aes(label = objNum, colour = expectedInKitchen), hjust = 0, vjust = 0) + 
                 geom_smooth() +  
                 labs(y = 'Time to place object in sec', 
                      x = "Post-ratings expectancy", 
                      title = 'Model 23: General expectancy') + 
                 coord_cartesian(ylim = c(0, 20), xlim = c(-100, 100), expand = TRUE) + 
                 theme(plot.margin = unit(c(1,7,1,1), "lines"), legend.position = "none")

grid.arrange(plot15, plot16, plot17, plot18, plot19, ncol = 2, nrow = 3)

# Legend
paste(as.character(objectAgg$objNum), "=", objectAgg$objName)
##  [1] "1 = microwave"      "2 = kitchen roll"   "3 = saucepan"      
##  [4] "4 = toaster"        "5 = bowl of fruits" "6 = tea pot"       
##  [7] "7 = knife"          "8 = mixer"          "9 = bread"         
## [10] "10 = glass jug"     "11 = mug"           "12 = dishes"       
## [13] "13 = towels"        "14 = toy"           "15 = pile of books"
## [16] "16 = umbrella"      "17 = hat"           "18 = helmet"       
## [19] "19 = calendar"      "20 = fan"
# Model 19: 
model19 <- lmer(answerTime ~  objLocTargetRating + 
                  I(objLocTargetRating*objLocTargetRating) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model19))
Estimate SE DF T P Sig
(Intercept) 11.50 1.54 15.87 7.45 0.0000
objLocTargetRating 0.08 0.84 127.43 0.10 0.9234
I(objLocTargetRating * objLocTargetRating) -1.09 1.01 129.99 -1.07 0.2848
# Model 20: 
model20 <- lmer(answerTime ~  targetRankPre + 
                  I(targetRankPre*targetRankPre) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model20))
Estimate SE DF T P Sig
(Intercept) 10.27 1.64 20.41 6.27 0.0000
targetRankPre 0.78 0.80 123.26 0.98 0.3306
I(targetRankPre * targetRankPre) 0.18 1.10 123.62 0.16 0.8697
# Model 21: 
model21 <- lmer(answerTime ~  objLocTargetNorm + 
                  I(objLocTargetNorm*objLocTargetNorm) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model21))
Estimate SE DF T P Sig
(Intercept) 9.77 1.49 13.55 6.57 0.0000
objLocTargetNorm -1.20 0.78 122.71 -1.54 0.1254
I(objLocTargetNorm * objLocTargetNorm) 0.65 0.82 122.84 0.80 0.4277
# Model 22: 
model22 <- lmer(answerTime ~  generalRatingNorm + 
                  I(generalRatingNorm*generalRatingNorm) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model22))
Estimate SE DF T P Sig
(Intercept) 10.39 1.60 19.42 6.48 0.0000
generalRatingNorm 2.12 1.16 124.57 1.83 0.0694 .
I(generalRatingNorm * generalRatingNorm) -0.11 1.11 125.13 -0.10 0.9215
#kable(createResultTable(anova(model22)))

# Model 23: 
model23 <- lmer(answerTime ~  generalRatingPost + 
                  I(generalRatingPost*generalRatingPost) + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model23))
Estimate SE DF T P Sig
(Intercept) 10.56 1.66 16.77 6.37 0.0000
generalRatingPost 1.81 1.25 128.58 1.45 0.1495
I(generalRatingPost * generalRatingPost) -0.29 1.17 129.54 -0.24 0.8080

Answer time models with covariates

# Model 24: 
model24 <- lmer(answerTime ~  objLocTargetRating + 
                  I(objLocTargetRating*objLocTargetRating) + 
                  generalRatingPost +
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model24))
Estimate SE DF T P Sig
(Intercept) 12.05 1.58 15.51 7.62 0.0000
objLocTargetRating -0.01 0.81 125.92 -0.01 0.9898
I(objLocTargetRating * objLocTargetRating) -1.83 1.02 128.90 -1.79 0.0765 .
generalRatingPost 2.45 0.87 124.36 2.82 0.0055 **
# Model 25: 
model25 <- lmer(answerTime ~  targetRankPre + 
                  I(targetRankPre*targetRankPre) + 
                  generalRatingNorm + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model25))
Estimate SE DF T P Sig
(Intercept) 13.40 1.84 30.53 7.28 0.0000
targetRankPre 0.11 0.79 122.13 0.14 0.8854
I(targetRankPre * targetRankPre) -3.24 1.43 122.66 -2.27 0.0251 *
generalRatingNorm 3.82 1.08 122.15 3.53 0.0006 ***
# Model 26: 
model26 <- lmer(answerTime ~  objLocTargetNorm + 
                  I(objLocTargetNorm*objLocTargetNorm) + 
                  generalRatingNorm + 
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model26))
Estimate SE DF T P Sig
(Intercept) 8.94 1.48 14.85 6.04 0.0000
objLocTargetNorm -0.50 0.80 122.12 -0.62 0.5371
I(objLocTargetNorm * objLocTargetNorm) 1.30 0.84 121.86 1.56 0.1224
generalRatingNorm 2.43 0.90 122.45 2.71 0.0076 **
# Model 27: 
model27 <- lmer(answerTime ~  generalRatingNorm + 
                  I(generalRatingNorm*generalRatingNorm) + 
                  objLocTargetNorm +
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model27))
Estimate SE DF T P Sig
(Intercept) 10.34 1.61 19.57 6.43 0.0000
generalRatingNorm 2.00 1.18 123.65 1.69 0.0930 .
I(generalRatingNorm * generalRatingNorm) -0.05 1.12 124.08 -0.04 0.9676
objLocTargetNorm -0.46 0.81 122.09 -0.56 0.5747
# Model 28: 
model28 <- lmer(answerTime ~  generalRatingPost + 
                  I(generalRatingPost*generalRatingPost) + 
                  objLocTargetRating +
                  (1 | subNum), 
                data = subRecall)

kable(createResultTable(model28))
Estimate SE DF T P Sig
(Intercept) 10.58 1.66 16.82 6.37 0.0000
generalRatingPost 1.80 1.25 127.57 1.44 0.1533
I(generalRatingPost * generalRatingPost) -0.32 1.18 128.57 -0.27 0.7898
objLocTargetRating 0.38 0.79 124.34 0.47 0.6369

Answer time interpretation

Here, only the models that include the respective covariate yield significant effects, but the results are interesting. For instance, model 25 shows that, when general expectancy is controlled for, there is an inverted U-shaped relationship; this corresponds to a U-shaped relationship with memory if one assumes that shorter answer times are indicative of better memory.
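Whether the covariate itself earns its place in a model can also be checked with a nested-model comparison. A base-R stand-in (lm() and simulated data instead of lmer() and subRecall):

```r
# Simulated answer times in which only the covariate carries signal.
set.seed(2)
expectancy <- rnorm(120)
covariate  <- rnorm(120)
answerTime <- 10 + 2.5 * covariate + rnorm(120, sd = 3)

fit0 <- lm(answerTime ~ expectancy + I(expectancy^2))
fit1 <- lm(answerTime ~ expectancy + I(expectancy^2) + covariate)

anova(fit0, fit1)  # F-test on the covariate; for lmer fits use anova() or AIC()
```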

Further analysis

Factorisation

Here I tried the classical way of analysing the data: creating a factor (unexpected, neutral, expected) and then aggregating by participant and that factor.

# Based on post-encoding ratings
combData2                      <- combData
combData2$postRatingTargetRank <- ddply(combData2, 
                                        c('subNum'), 
                                        summarise, 
                                        objLocTargetRating = objLocTargetRating, 
                                        rank = rank(objLocTargetRating))$rank

combData2$expFact <- 2
combData2$expFact[which(combData2$postRatingTargetRank <= 7)]  <- 1
combData2$expFact[which(combData2$postRatingTargetRank >= 14)] <- 3
combData2$expFact <- factor(combData2$expFact, labels = c("unexpected", "neutral", "expected"))

combData2Exp <- ddply(combData2, 
                      c('subNum', 'expFact'), 
                      summarise,
                      N = length(accAFC),
                      mean = mean(accAFC),
                      ratingsMean = mean(objLocTargetRating))

# Aggregate the per-participant means (i.e. data that was already aggregated
# within participants), so that each participant contributes equally.
combData2ExpAgg <- ddply(combData2Exp, 
                         c('expFact'), 
                         summarise,
                         N = length(mean),
                         accAfcMean = mean(mean),
                         accAfcSd = sd(mean),
                         accAfcSe = accAfcSd/sqrt(N))

ggplot(combData2Exp, aes(x = expFact, y = mean)) +
  geom_dotplot(binaxis='y', stackdir='center') +
  stat_summary(fun.y = mean, geom = "line", aes(group=1))  +
  labs(y = 'Mean accuracy (3AFC)', 
       x = "Expectancy", 
       title = 'Relationship with post-encoding ratings') + 
  theme(plot.margin = margin(10, 10, 10, 10))

# Based on pre-encoding normative ranks
combData3               <- combData
combData3$targetRankPre <- ddply(combData3, 
                                 c('subNum'), 
                                 summarise, 
                                 targetRankPre = targetRankPre, 
                                 rank = rank(targetRankPre))$rank

combData3$expFact <- 2
combData3$expFact[which(combData3$targetRankPre <= 7)] <- 1
combData3$expFact[which(combData3$targetRankPre >= 14)] <- 3
combData3$expFact <- factor(combData3$expFact, labels = c("unexpected", "neutral", "expected"))

combData3Exp <- ddply(combData3, 
                      c('subNum', 'expFact'), 
                      summarise,
                      N = length(accAFC),
                      mean = mean(accAFC),
                      ratingsMean = mean(targetRankPre))

# Aggregate the per-participant means (i.e. data that was already aggregated
# within participants), so that each participant contributes equally.
combData3ExpAgg <- ddply(combData3Exp, 
                         c('expFact'), 
                         summarise,
                         N = length(mean),
                         accAfcMean = mean(mean),
                         accAfcSd = sd(mean),
                         accAfcSe = accAfcSd/sqrt(N))

ggplot(combData3Exp, aes(x = expFact, y = mean)) +
   geom_dotplot(binaxis='y', stackdir='center') +
  stat_summary(fun.y = mean, geom = "line", aes(group=1))  +
  labs(y = 'Mean accuracy (3AFC)', 
       x = "Expectancy", 
       title = 'Relationship with pre-encoding ranks') + 
  theme(plot.margin = margin(10, 10, 10, 10))
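The classical follow-up on these aggregated means would be a repeated-measures ANOVA on the expectancy factor. A sketch with simulated per-participant means in place of combData2Exp:

```r
# 16 simulated participants x 3 expectancy levels of aggregated 3AFC accuracy.
set.seed(3)
demo <- expand.grid(subNum  = factor(1:16),
                    expFact = factor(c("unexpected", "neutral", "expected")))
demo$mean <- plogis(rnorm(nrow(demo)))  # fake per-cell accuracy means

# Repeated-measures ANOVA with participant as the error stratum
fit <- aov(mean ~ expFact + Error(subNum / expFact), data = demo)
summary(fit)
```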
